Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark

Authors

Abstract

Deep neural networks have achieved great success in many important remote sensing tasks. Nevertheless, their vulnerability to adversarial examples should not be neglected. In this study, we systematically analyze universal adversarial examples in remote sensing data for the first time, without any knowledge of the victim model. Specifically, we propose a novel black-box adversarial attack method, namely, Mixup-Attack, and its simple variant Mixcut-Attack, for remote sensing data. The key idea of the proposed methods is to find common vulnerabilities among different networks by attacking the features in the shallow layer of a given surrogate model. Despite their simplicity, the proposed methods can generate transferable adversarial examples that deceive most state-of-the-art deep neural networks in both scene classification and semantic segmentation tasks with high success rates. We further provide the generated universal adversarial examples in a dataset named UAE-RS, which is the first dataset to provide black-box adversarial samples in the remote sensing field. We hope UAE-RS may serve as a benchmark that helps researchers design deep neural networks with strong resistance toward adversarial attacks. Codes are available online (https://github.com/YonghaoXu/UAE-RS).
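The core idea described in the abstract, steering the shallow-layer features of a surrogate model toward those of a mixed-up virtual image, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the surrogate's shallow layer is stood in for by a hypothetical random linear-plus-ReLU map, and the attack parameters (`eps`, `alpha`, `steps`, `lam`) are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the shallow layer of a surrogate model:
# a fixed random linear map followed by ReLU.
W = rng.normal(size=(64, 256)).astype(np.float32)

def shallow_features(x):
    return np.maximum(W @ x, 0.0)

def mixup_attack(x, x_other, eps=0.03, alpha=0.004, steps=50, lam=0.5):
    """Sketch of a Mixup-Attack-style objective: push the shallow features
    of x toward those of a mixup image lam * x + (1 - lam) * x_other."""
    target = shallow_features(lam * x + (1 - lam) * x_other)
    x_adv = x.copy()
    for _ in range(steps):
        z = W @ x_adv
        f = np.maximum(z, 0.0)
        # gradient of 0.5 * ||f - target||^2 w.r.t. x_adv (manual backprop
        # through ReLU and the linear map)
        grad = W.T @ ((f - target) * (z > 0))
        x_adv = x_adv - alpha * np.sign(grad)     # move features toward target
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

x = rng.random(256).astype(np.float32)        # "clean image" (flattened)
x_other = rng.random(256).astype(np.float32)  # image from another category
x_adv = mixup_attack(x, x_other)
```

Because the loss is defined on intermediate features of the surrogate rather than on any victim's logits, the resulting perturbation does not depend on the victim model, which is what makes the examples transferable in the black-box setting. Mixcut-Attack follows the same recipe but composes the virtual image by cutting and pasting regions instead of pixel-wise mixing.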


Similar Articles

Data fusion in remote sensing: examples

Remotely sensed data provide information on Earth phenomena in different modalities and are spectrally, spatially, and temporally inhomogeneous. Yet all these data offer different representations of the same physical environment. How should these different representations be handled, and how can the best be extracted from a combination of them? Data fusion is one of the answers to the best possible use of this s...


Bayesian Methodology for Ocean-color Remote Sensing

The inverse ocean color problem, i.e., the retrieval of marine reflectance from top-of-atmosphere (TOA) reflectance, is examined in a Bayesian context. The solution is expressed as a probability distribution that measures the likelihood of encountering specific values of the marine reflectance given the observed TOA reflectance. This conditional distribution, the posterior distribution, allows ...


Generating Adversarial Examples with Adversarial Networks

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...


Semantic Adversarial Examples

Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is...


Spatially Transformed Adversarial Examples

Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the Lp distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of Lp distance as...


Journal

Journal title: IEEE Transactions on Geoscience and Remote Sensing

Year: 2022

ISSN: 0196-2892, 1558-0644

DOI: https://doi.org/10.1109/tgrs.2022.3156392